70 research outputs found

    Strengthening of metals using a graphene monolayer

    A practical route to exploiting graphene’s supreme properties for a variety of applications is to incorporate graphene layers into composite materials. Harnessing the high stiffness, intrinsic strength and transport properties of graphene in its composites requires both high-quality graphene with low defect density and precise control of the interfacial interactions between the graphene and the matrix. These requirements hold equally for polymer and metal matrices, and enable the use of graphene in applications ranging from tough thin films for flexible electronics to the design of advanced aerospace structures. My dissertation addresses the synthesis, understanding and control of these composites and their mechanical properties, probed from the nano- to the microscale. To this end, a model system of ultrathin metal films coated with a graphene monolayer via chemical vapor deposition (CVD) is designed and used to study the contribution of as-grown graphene to the mechanics of graphene-metal composite thin films. Because the metal layer is thin, typically less than 300 nm, individual or few graphene layers contribute strongly to the composite thin film’s mechanics. To create the most intimate possible interface between the metal and the graphene, CVD synthesis is used to grow the graphene wrapping around the surface of the films. A highly dynamic CVD synthesis route is developed to achieve high-quality monolayer graphene growth on ultrathin metal films while avoiding the solid-state dewetting instability that takes place at the extremely high synthesis temperatures. We study how the competition between temperature-driven segregation and precipitation of carbon radicals governs graphene’s nucleation and growth kinetics on ultrathin metal catalysts. The result of the dynamic recipe is repeatable growth of graphene monolayers with ultralow defect density, as confirmed by Raman spectroscopy. Precise mechanical characterization of the ultrathin films is carried out using various nanoindentation modalities, including indentation of supported and freestanding thin films. CVD-grown graphene-metal thin film composites exhibit an unusual increase in elastic modulus, strength and toughness. For example, there are 35% and 57% increases in the Young’s modulus and tensile strength of graphene-palladium thin film composites compared to a bare palladium film 66 nm thick. Notably, this enhancement exhibits scale effects: the increase in composite modulus varies with thickness and is highest for the thinnest metal films. My work demonstrates that the inherently strong interfaces formed during synthesis between graphene and strongly interacting metals such as Ni and Pd could enable the manufacture of composites with significantly higher performance. I also observed an increase in toughness and qualitatively different modes of crack propagation owing to the high-stiffness graphene shield added to the metal surface during synthesis. Raman spectroscopy and electron imaging of surface reconstructions confirm the high interfacial stresses arising from the combination of the lattice mismatch between the graphene and the metals and the growth kinetics. The findings of this dissertation promote graphene-based thin film composites for flexible electronic devices, and enable fundamental studies exploiting strain engineering at the graphene-metal interface for electronics, chemistry and mechanics. Furthermore, the results of this dissertation are broadly relevant to the design of bulk graphene-based composite materials.
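
    For intuition on the scale effect described above, even a simple isostrain (rule-of-mixtures) estimate predicts that a single graphene layer matters more as the metal gets thinner. The sketch below uses nominal literature values (about 1 TPa modulus and 0.335 nm thickness for monolayer graphene, roughly 120 GPa for palladium), not figures from the dissertation; the measured 35% modulus gain for a 66 nm Pd film far exceeds this simple volume-fraction bound, underlining the role the abstract assigns to interfacial interactions.

        # Illustrative rule-of-mixtures (isostrain) estimate, not the dissertation's model.
        # All material constants are nominal literature values.
        E_GRAPHENE = 1000.0   # GPa, in-plane Young's modulus of monolayer graphene (nominal)
        T_GRAPHENE = 0.335    # nm, effective monolayer thickness (nominal)
        E_METAL = 120.0       # GPa, bulk palladium (nominal)

        def composite_modulus(t_metal_nm):
            """Thickness-weighted modulus of a graphene/metal bilayer under equal strain."""
            t_total = t_metal_nm + T_GRAPHENE
            return (E_METAL * t_metal_nm + E_GRAPHENE * T_GRAPHENE) / t_total

        for t in (300.0, 150.0, 66.0, 30.0):
            gain = 100.0 * (composite_modulus(t) / E_METAL - 1.0)
            print(f"metal thickness {t:5.0f} nm -> rule-of-mixtures modulus gain ~{gain:4.1f} %")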

    Evaluation of a new high-throughput method for identifying quorum quenching bacteria

    Quorum sensing (QS) is a population-dependent mechanism by which bacteria synchronize social behaviors such as secretion of virulence factors. The enzymatic interruption of QS, termed quorum quenching (QQ), has been suggested as a promising alternative anti-virulence approach. In order to efficiently identify QQ bacteria, we developed a simple, sensitive and high-throughput method based on the biosensor Agrobacterium tumefaciens A136. This method effectively eliminates false positives caused by inhibition of biosensor A136 growth and by alkaline hydrolysis of N-acylhomoserine lactones (AHLs), through normalization of beta-galactosidase activities and addition of PIPES buffer, respectively. Our approach was successfully applied to screening 366 strains, yielding 25 QQ strains belonging to 14 species. Further experiments revealed that the QQ strains differed widely in the type of QQ enzyme, substrate specificity and heat resistance. The QQ bacteria identified could potentially be used to control disease in aquaculture.
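
    The growth-normalization step is what removes false positives from biosensor inhibition. Below is a minimal sketch of that idea using the standard Miller-unit calculation; the study's exact readout and constants may differ. Dividing the beta-galactosidase signal by culture density means a candidate that merely slows A136 growth no longer looks like a quorum quencher.

        # Standard Miller-unit normalization of beta-galactosidase activity
        # (illustrative; the screen's exact protocol and constants may differ).
        def miller_units(a420, a550, od600, time_min, volume_ml):
            """Beta-galactosidase activity normalized by biosensor culture density (OD600)."""
            return 1000.0 * (a420 - 1.75 * a550) / (time_min * volume_ml * od600)

        # Hypothetical readings: same raw color signal, different biosensor growth.
        print(miller_units(0.60, 0.05, od600=0.80, time_min=30, volume_ml=0.1))  # healthy biosensor
        print(miller_units(0.60, 0.05, od600=0.20, time_min=30, volume_ml=0.1))  # growth-inhibited biosensor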

    Image Enhancement via Deep Spatial and Temporal Networks

    Image enhancement is a classic problem in computer vision that has been studied for decades. It includes various subtasks such as super-resolution, image deblurring, rain removal and denoising. Among these tasks, image deblurring and rain removal have become increasingly active, as they play an important role in areas such as autonomous driving, video surveillance and mobile applications. The two tasks are also connected: blur and rain often degrade images simultaneously, and the performance of their removal relies on spatial and temporal learning. To help generate sharp images and videos, this thesis proposes efficient algorithms based on deep neural networks for image deblurring and rain removal. In the first part of this thesis, we study the problem of image deblurring and propose four deep learning based deblurring methods. First, for single image deblurring, a new framework is presented which first learns how to transfer sharp images to realistic blurry images via a learning-to-blur Generative Adversarial Network (GAN) module, and then trains a learning-to-deblur GAN module to generate sharp images from blurry versions. In contrast to prior work that solely focuses on learning to deblur, the proposed method learns to realistically synthesize blurring effects using unpaired sharp and blurry images. Second, for video deblurring, spatio-temporal learning and adversarial training are used to recover sharp and realistic video frames from blurry inputs. 3D convolutional kernels built on deep residual neural networks are employed to capture better spatio-temporal features, and the network is trained with both a content loss and an adversarial loss to drive the model to generate realistic frames. Third, the problem of extracting a sharp image sequence from a single motion-blurred image is tackled. A detail-aware network is presented, namely a cascaded generator that handles the problems of ambiguity, subtle motion and loss of detail. Finally, this thesis proposes a level-attention deblurring network and constructs a new large-scale dataset of images blurred by various factors, which is used to evaluate existing deep deblurring methods alongside the proposed one. In the second part of this thesis, we study the problem of image deraining and propose three deep learning based deraining methods. First, for single image deraining, the problem of joint removal of raindrops and rain streaks is tackled. In contrast to most prior work, which focuses on either raindrop or rain streak removal alone, a dual attention-in-attention model is presented that removes raindrops and rain streaks simultaneously. Second, for video deraining, a novel end-to-end framework is proposed that obtains the spatial representation and temporal correlations from ResNet-based and LSTM-based architectures, respectively. The proposed method can generate multiple derained frames at a time and outperforms the state-of-the-art methods in terms of quality and speed. Finally, for stereo image deraining, a deep stereo semantic-aware deraining network is proposed, the first of its kind in computer vision. Unlike previous methods that learn only from pixel-level loss functions or monocular information, the proposed network advances image deraining by leveraging semantic information and the visual deviation between the two views.
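
    As a concrete illustration of the spatio-temporal building block mentioned for video deblurring, below is a minimal 3D residual block in PyTorch; channel counts and layer depth are illustrative assumptions, not the thesis's exact architecture.

        # Minimal 3D residual block for spatio-temporal feature learning (illustrative only).
        import torch
        import torch.nn as nn

        class Residual3DBlock(nn.Module):
            def __init__(self, channels=64):
                super().__init__()
                self.body = nn.Sequential(
                    nn.Conv3d(channels, channels, kernel_size=3, padding=1),
                    nn.ReLU(inplace=True),
                    nn.Conv3d(channels, channels, kernel_size=3, padding=1),
                )

            def forward(self, x):
                # x: (batch, channels, frames, height, width)
                return x + self.body(x)

        feats = torch.randn(1, 64, 5, 64, 64)     # features from 5 consecutive blurry frames
        print(Residual3DBlock(64)(feats).shape)   # torch.Size([1, 64, 5, 64, 64])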

    InterTracker: Discovering and Tracking General Objects Interacting with Hands in the Wild

    Understanding human interaction with objects is an important research topic for embodied Artificial Intelligence, and identifying the objects that humans are interacting with is a primary problem for interaction understanding. Existing methods rely on frame-based detectors to locate interacting objects; however, this approach suffers from heavy occlusion, background clutter and distracting objects. To address these limitations, we propose to leverage the spatio-temporal information of hand-object interaction to track interacting objects under these challenging conditions. Unlike standard object-tracking problems, we have no prior knowledge of the objects to be tracked, so we first exploit the spatial relation between hands and objects to adaptively discover the interacting objects in the scene. Second, the consistency and continuity of object appearance between successive frames are exploited to track the objects. With this tracking formulation, our method also benefits from training on large-scale general object-tracking datasets. We further curate a video-level hand-object interaction dataset from 100DOH for testing and evaluation. The quantitative results demonstrate that our proposed method outperforms the state-of-the-art methods. Specifically, in scenes with continuous interaction with different objects, we achieve an improvement of about 10% in the Average Precision (AP) metric. Our qualitative findings also illustrate that our method produces more continuous trajectories for interacting objects. Comment: IROS 202
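
    To make the hand-to-object spatial cue concrete, here is a toy selection rule: among candidate object detections, keep the one whose box lies closest to a detected hand box. The box format, distance measure and example coordinates are illustrative assumptions, not the paper's actual discovery module.

        # Toy "spatial relation" cue: pick the candidate object box nearest to the hand box.
        def box_center(box):
            x1, y1, x2, y2 = box
            return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

        def center_distance(a, b):
            (ax, ay), (bx, by) = box_center(a), box_center(b)
            return ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5

        def pick_interacting_object(hand_box, object_boxes):
            """Return the candidate object box closest to the hand (hypothetical rule)."""
            return min(object_boxes, key=lambda obj: center_distance(hand_box, obj))

        hand = (100, 120, 160, 200)                                  # hypothetical hand detection
        candidates = [(300, 50, 380, 110), (150, 150, 230, 230)]     # hypothetical object detections
        print(pick_interacting_object(hand, candidates))             # -> (150, 150, 230, 230)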

    Towards an Effective and Efficient Transformer for Rain-by-snow Weather Removal

    Rain-by-snow weather removal is a specialized task in weather-degraded image restoration that aims to eliminate coexisting rain streaks and snow particles. In this paper, we propose RSFormer, an efficient and effective Transformer that addresses this challenge. We first explore the proximity of convolutional networks (ConvNets) and vision Transformers (ViTs) in hierarchical architectures and find experimentally that they perform comparably at intra-stage feature learning. On this basis, we utilize a Transformer-like convolution block (TCB) that replaces the computationally expensive self-attention while preserving attention-like adaptation to the input content. We also demonstrate that cross-stage progression is critical for performance improvement, and propose a global-local self-attention sampling mechanism (GLASM) that down-/up-samples features while capturing both global and local dependencies. Finally, we synthesize two novel rain-by-snow datasets, RSCityScape and RS100K, to evaluate the proposed RSFormer. Extensive experiments verify that RSFormer achieves the best trade-off between performance and computation time among the compared restoration methods; for instance, it outperforms Restormer with 1.53% fewer parameters and 15.6% less inference time. Datasets, source code and pre-trained models are available at https://github.com/chdwyb/RSFormer.
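
    As a rough sketch of what a Transformer-like convolution block can look like, the PyTorch snippet below swaps self-attention for a depthwise convolution with a content-dependent gate inside a residual block; it illustrates the general idea only and is not RSFormer's actual TCB.

        # Depthwise convolution with content-dependent gating as an attention substitute
        # (illustration of the TCB idea, not RSFormer's implementation).
        import torch
        import torch.nn as nn

        class ConvTokenMixer(nn.Module):
            def __init__(self, dim=48):
                super().__init__()
                self.norm = nn.GroupNorm(1, dim)                          # channel-wise LayerNorm substitute
                self.dwconv = nn.Conv2d(dim, dim, 3, padding=1, groups=dim)
                self.gate = nn.Conv2d(dim, dim, 1)                        # input-adaptive gate
                self.proj = nn.Conv2d(dim, dim, 1)

            def forward(self, x):
                y = self.norm(x)
                y = self.dwconv(y) * torch.sigmoid(self.gate(y))          # gated spatial mixing
                return x + self.proj(y)                                   # residual connection

        x = torch.randn(1, 48, 128, 128)
        print(ConvTokenMixer(48)(x).shape)    # torch.Size([1, 48, 128, 128])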

    Genome analysis of Flaviramulus ichthyoenteri Th78T in the family Flavobacteriaceae: insights into its quorum quenching property and potential roles in fish intestine

    Background: Intestinal microbes play significant roles in fish and could potentially be used as probiotics in aquaculture. In our previous study, Flaviramulus ichthyoenteri Th78(T), a novel species in the family Flavobacteriaceae, was isolated from fish intestine and showed strong quorum quenching (QQ) ability. To identify the QQ enzymes in Th78(T) and explore its potential roles in the fish intestine, we sequenced the genome of Th78(T) and performed extensive genomic analysis. Results: An N-acyl homoserine lactonase, FiaL, belonging to the metallo-beta-lactamase superfamily was identified, and the QQ activity of heterologously expressed FiaL was confirmed in vitro. FiaL has relatively little similarity to known lactonases (25.2~27.9% amino acid sequence identity). Th78(T) can produce various digestive enzymes, including alginate lyases and lipases, and enzymes essential for the production of B vitamins such as biotin, riboflavin and folate are predicted. Genes encoding sialic acid lyases, sialidases, sulfatases and fucosidases, which contribute to the utilization of mucus, are present in the genome. In addition, genes related to responses to different stresses and to gliding motility were also identified. Comparative genome analysis shows that Th78(T) has more genes specifically involved in carbohydrate transport and metabolism than two other Flavobacteriaceae isolates, both obtained from sediments. Conclusions: The genome of Th78(T) exhibits evident advantages for survival in the fish intestine, including production of a QQ enzyme, utilization of various nutrients available in the intestine, and the ability to produce digestive enzymes and vitamins, which also suggests the potential of Th78(T) as a probiotic in aquaculture.

    Restoring Vision in Hazy Weather with Hierarchical Contrastive Learning

    Image restoration under hazy weather conditions, known as single image dehazing, is of significant interest for various computer vision applications. In recent years, deep learning-based methods have achieved success; however, existing image dehazing methods typically neglect the hierarchy of features in the neural network and fail to fully exploit their relationships. To this end, we propose an effective image dehazing method named Hierarchical Contrastive Dehazing (HCD), which is based on feature fusion and contrastive learning strategies. HCD consists of a hierarchical dehazing network (HDN) and a novel hierarchical contrastive loss (HCL). Specifically, the core design of the HDN is a hierarchical interaction module that uses multi-scale activation to revise the feature responses hierarchically. To cooperate with the training of the HDN, we propose the HCL, which performs contrastive learning on hierarchically paired exemplars, facilitating haze removal. Extensive experiments on the public datasets RESIDE, HazeRD and DENSE-HAZE demonstrate that HCD quantitatively outperforms the state-of-the-art methods in terms of PSNR and SSIM, and achieves better visual quality. Comment: 30 pages, 10 figures
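
    As a compact sketch of what a hierarchical contrastive regularizer for dehazing can look like, the snippet below pulls restored-image features toward clear-image features and pushes them away from hazy-input features at several levels; the weights and the L1 distance are illustrative assumptions, and HCL's exact formulation may differ.

        # Hierarchical contrastive regularizer sketch (illustrative, not HCD's exact HCL).
        import torch
        import torch.nn.functional as F

        def hierarchical_contrastive_loss(anchor_feats, positive_feats, negative_feats,
                                          weights=(1.0, 0.5, 0.25), eps=1e-7):
            """anchor = restored image features, positive = clear image, negative = hazy input."""
            loss = 0.0
            for w, a, p, n in zip(weights, anchor_feats, positive_feats, negative_feats):
                loss = loss + w * F.l1_loss(a, p) / (F.l1_loss(a, n) + eps)
            return loss

        # Hypothetical three-level feature pyramids (e.g. from a frozen backbone).
        pyramid = lambda: [torch.randn(1, 64, 64, 64),
                           torch.randn(1, 128, 32, 32),
                           torch.randn(1, 256, 16, 16)]
        print(hierarchical_contrastive_loss(pyramid(), pyramid(), pyramid()))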